Search Results: "fog"

24 January 2013

Benjamin Mako Hill: Aaron Swartz

I moved to Boston in 2005 at the same time that Aaron Swartz did and we were introduced by a mutual friend. Aaron was one of my first friends in Boston and we became close. When Aaron moved to San Francisco, I moved into his apartment in Somerville where he kept a room for a year or so. Mika and I still live there. His old posters remain on our walls and his old books remain on our shelves. Aaron's brothers Ben and Noah both lived with us and remain close friends. I have spent hours (days?) reading and thinking about Aaron over the last two weeks. It has been disorienting but beautiful to read the descriptions of, and commentaries on, Aaron's life. Although I suspect I may never feel ready, there are several things I want to say about Aaron's death, about Aaron's work, and about what Aaron means to me.

1. Aaron's Death

The reaction to Aaron's death has been overwhelming and inspirational. At some point in the near future I plan to join some of the important campaigns already being waged in his name. There are many attempts to understand why Aaron died and many attempts to prevent it from happening to others in the future. Unfortunately, I am familiar with the process of soul-searching and second-guessing that happens when a friend commits suicide. I'm sure that every one of his friends has asked themselves, as I have: what could I have done differently? I don't know the answer, but I do know this: Aaron was facing the real risk of losing half his life to prison. But even if one believed that he was only facing the likely loss of ten percent, or even one percent, of his life, I wish that we all, and I wish that I in particular, had reacted with the passion, time, anger, activity, and volume proportional to how we have reacted in the last two weeks, when he lost the whole thing.

2. Aaron's Work

Of course, Aaron and I worked on related projects and I followed his work. And despite all the incredible things that have been said about Aaron, I feel that Aaron's work was more focused, more ambitious, more transformative, more innovative, and more reckless (in a positive sense) than the outpouring online suggests. Although discussion of Aaron has focused on his successes, achievements, and victories, the work that inspired me most was not the projects that were most popular or successful. Much of Aaron's work was deeply, and as it turned out overly, ambitious. His best projects were self-conscious attempts to transform knowledge production, organization, and dissemination. Although he moved from project to project, his work was consistently focused on bringing semantic-web concepts and technologies to peer production, to the movement for free culture, and to progressive political activism, and on the meta-politics necessary to remove barriers to this work. For example, Aaron created an online collaborative encyclopedia project called TheInfoNetwork (TIN) several years before Wikipedia was started. I talked to Aaron at length about that project for a research project I am working on. Aaron's work was years ahead of its time; in 2000, TIN embraced more of the Wikimedia Foundation's current goals and principles than Wikipedia did when it was launched. While Wikipedia sought to create a free reference work online, Aaron's effort sought to find out what a reference work online could look like. It turned out to be too ambitious, perhaps, but it taught many, including myself, an enormous amount in the process.
When I met Aaron, he was in the process of starting a company, Infogami, that was trying to chase many of TIN's goals. Infogami was conceived of as a wiki aware of the structure of data. The model was both simple and profound. Years later, Wikimedia Deutschland's Wikidata project is beginning to bring some of these ideas to the mainstream. Infogami merged with Reddit as equal halves of a company with a shared technological foundation based on some of Aaron's other work. But when Reddit took off, Infogami was rarely mentioned, even by Aaron. I think that is too bad. Reddit got traction because it made the most popular stuff more visible; Reddit is popular, fundamentally, because popular things are popular. But popular is not necessarily positive. For that reason, Reddit never struck me as either surprising or transformative. But what started as Aaron's half of the company, on the other hand, aimed to create a powerful form of democratized information production and dissemination. And although Infogami didn't take off, the ideas and code behind the project found life at the heart of Open Library and will continue to influence and inspire countless other projects. I believe that Infogami's lessons and legacy will undergird a generation of transformative peer production technologies in a way that the Reddit website, important as it is, will not.

3. What Aaron Means to Me

A lot of what has been written about Aaron speaks to his intelligence, his curiosity, his generosity, his ethics, and his drive. Although I recognize all these qualities in the Aaron I knew, I've felt alienated by how abstract some of the discussion of Aaron has been; my memories are of particularities.
I remember the time Aaron was hospitalized and I spent two hours on the phone going through my bookshelves, arguing with him about the virtues of the books in my library as we tried to decide which books I would bring him. I remember Aaron confronting Peter Singer, intellectual founder of the modern animal rights movement, at the Boston Vegetarian Food Festival, to ask if humans had a moral obligation to stop animals from killing each other. I lurked behind, embarrassed about the question but curious to hear the answer. (Singer sighed, said yes, sort of, and complimented Aaron on the enormous Marxist commentary he was carrying.) I remember 1-800-INTERNET.com. I remember talking with Aaron about whether being wealthy could be ethical. I argued it could not, but Aaron argued, uncharacteristically I thought, that it could. Aaron told Mika she should slap him if he ever became wealthy. The very next day, it was announced that his company had been acquired and that Aaron was a millionaire. I remember the standing bets I had with Aaron and how he would email me every time news reports favored his claims (but never when they did not). And I remember that I won't hear from him again.
Aaron was a friend and inspiration. I miss him deeply and I am very sad.

17 January 2013

Tollef Fog Heen: Gitano git hosting with ACLs and other shininess

gitano is not entirely unlike the non-web, server side of GitHub. It allows you to create and manage users and their SSH keys, groups and repositories from the command line. Repositories have ACLs associated with them. Those can be complex ("allow user X to push to master in the doc/ subtree") or trivial ("admin can do anything"). Gitano is written by Daniel Silverstone, and I'd like to thank him both for writing it and for holding my hand as I went stumbling through my initial gitano setup.

Getting started with Gitano can be a bit tricky, as it's not yet packaged and fairly undocumented. Until it is packaged, it's install-from-source time. You need luxio, lace, supple, clod, gall and gitano itself. luxio needs a make install LOCAL=1; the others will be installed to /usr/local with just make install.

Once that is installed, create a user to hold the instance. I've named mine git, but you're free to name it whatever you would like. As that user, run gitano-setup and answer the prompts. I'll use git.example.com as the host name and john as the user I'm setting this up for.

To create users, run ssh git@git.example.com user add john john@example.com John Doe, then add their SSH key with ssh git@git.example.com as john sshkey add workstation < /tmp/john_id_rsa.pub. To create a repository, run ssh git@git.example.com repo create myrepo. Out of the box, this only allows the owner (typically "admin", unless overridden) to do anything with it. To change ACLs, you'll want to grab the refs/gitano/admin branch. This lives outside the space git usually uses for branches, so you can't just check it out. The easiest way to check it out is to use git-admin-clone. Run it as git-admin-clone git@git.example.com:myrepo ~/myrepo-admin and then edit in ~/myrepo-admin. Use git to add, commit and push as normal from there.

To change ACLs for a given repo, you'll want to edit the rules/main.lace file. A real-world example can be found in the NetSurf repository, and the lace syntax documentation might be useful. A lace file consists of four types of lines: comments, definitions, allows and denials. Rules are processed one by one from the top, and processing terminates whenever a matching allow or deny is found. Conditions can be matches against an update, such as ref refs/heads/master to match updates to the master branch. To create groupings, you can use the anyof or allof verbs in a definition. Allows and denials are checked against all the definitions listed, and if all of them match, the appropriate action is taken. Pay some attention to what conditions you group together, since a basic operation (is_basic_op, aka op_read and op_write) happens before git is even involved and you don't have a tree at that point, so rules like:
define is_master ref refs/heads/master
allow "Devs can push" op_is_basic is_master
simply won't work. You'll want to use a group and check on that for basic operations, and then have a separate rule to restrict refs, as in the sketch below.
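A minimal sketch of that split, assuming a group named devs exists; the group predicate and the rule wording here are illustrative guesses based on the description above, not verbatim gitano syntax:

# Let devs pass the early read/write check, where no tree exists yet;
# only the last rule looks at the ref being updated.
define is_dev group devs
define is_master ref refs/heads/master
allow "Devs can do basic operations" op_is_basic is_dev
allow "Devs can push to master" is_dev is_master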

9 November 2012

Gunnar Wolf: Road trip to ECSL 2012 in Guatemala

Encuentro Centroamericano de Software Libre! Guatemala! During a national (for us) holiday, so it's easy to go without missing too much work time! How could I miss the opportunity? Several years ago, I started playing with the idea of having a road trip. Probably this was first prompted by the UK crew and the Three Intrepid Motorcycle Riders arriving by land to DebConf 9; I don't know. Fact is, I wanted to go to DebConf10 in New York by land, as well as to DebConf12 in Nicaragua. Mostly due to a lack of time, I didn't, although we did start making some longish trips. Of course, my desire to show Regina what Mexico is like also helped! So, up until a week ago, our (according to my standards) long-distance driving experience included:
  • México → Guanajuato → Puerto Vallarta → Guanajuato → México, in early November 2011, for Festival de Software Libre, with Regina and our Blender friends Octavio and Claudia. Totalling almost 1900Km, mostly consisting of wide toll highway.
  • México → Xilitla → San Luis Potosí → México, in April 2012, just for fun and for a nice short vacation, alone with Regina. Totalling almost 1200Km, but through Sierra Gorda de Querétaro, a very tough stretch of about 250Km which we did at about 50Km/h on average. Beautiful route for sure! We didn't originally intend to go through San Luis Potosí, and it does not appear to make much sense, as it adds ~350Km to the total, but it was even quicker than going back by the same route and, according to those who know, even faster than our planned route via Tamazunchale and Ixmiquilpan!
  • México → San Luis Potosí → Zacatecas → Aguascalientes → Guanajuato → México, in May 2012, for Congreso Internacional de Software Libre, again with Octavio and Claudia. Totalling 1250Km, and following very good roads, although most of them were toll-free.
But there is always a certain halo over crossing a border, maybe more so in countries as large as Mexico. We convinced Pooka and Moni and, granted, with some apprehension, as we knew of some important security risks in the more rural areas we wanted to go through, we decided to go to Guatemala. And, although we wanted to go with a bit more time, Real Life took its toll: we could not take more time than the intersection of what our respective jobs offered. So, here goes a short(?) recap of our six-day-long, 3200Km trip. Of course, we have a map detailing this.

México → Veracruz

I came to my office early on Wednesday (31-oct), and left with Regina around 10AM towards Veracruz. We agreed to meet there with Moni and Pooka, who would take the night bus, and continue together. Crossing Mexico City proved to be the longest obstacle. We arrived at Veracruz already past 3PM, and spent a nice evening walking down the center and port of the city. Veracruz port can still be seen as part of central Mexico; I knew the road quite well.

Veracruz → San Andrés Tuxtla → Catemaco → San Cristóbal de las Casas

We met with our friends at the iconic Gran Café de la Parroquia at 6:30AM. Had a nice breakfast with coffee, and by 7:30 we were heading south-west. The reason to have a road trip was to get to know the route, to enjoy the countryside. So, given we "only" had to make 650Km this day, we took the non-toll road: a narrow path stretching along the coastal plains of Veracruz, until Acayucan. Doing so, we also saved some money, as the equivalent toll road is around MX$300 (~US$25)! Veracruz is a hot state. We ended up all sweaty and tired by 19:00, when we reached San Cristóbal. We had agreed not to drive at night, due to security issues, but fortunately there was quite a bit of traffic both ways between Tuxtla Gutiérrez (Chiapas state capital, around 1hr from San Cristóbal, where darkness got us) and our destination, so we carried on. Now, San Cristóbal is a high city, almost as high as Mexico City (2100m), and being more humid, it was quite chilly. We went for a walk, and were convinced that at a later time we had to stay for several days there. The city is beautiful, the region is breath-taking, there are a lot of great handicrafts as well, and it's overall very cheap. Really lovely place.

San Cristóbal de las Casas → Cd. Cuauhtémoc → La Mesilla → Guatemala

Once again, this day started early. We woke up ready to leave at 7AM, and not earlier because the hotel's parking didn't open earlier. After a very quick visit to San Cristóbal downtown, to take some photos that were not right the night before, we took the road to Comitán, stopping just for some tamales de bola y chipilín for breakfast. Central Chiapas is almost continuously populated, differing from most of my experience in Mexico. It is all humid, and has some very beautiful landscapes. We passed Comitán, which is a much larger city than we expected, went downhill after La Trinitaria, crossed a plain, and continued until hills started taking over again. We stopped in a very chaotic, dirty place: just across the border, where Ciudad Cuauhtémoc becomes La Mesilla. This border was basically what we expected: there is no half-official place to exchange money, so we had to buy quetzales from somebody who offered them on the street, at MX$2 per Q1 (where the real exchange rate should be around 1.50 to 1). While on the road, I was half-looking for exchange posts in Comitán and onwards, and found none (and being a festive day, they would probably have been closed anyway).
But we were expecting this, after all, and exchanged just the basic minimum: MX$600 (US$50, which by magic became Q300, US$40). The border procedure consists of:
  • Spraying the car against diseases (which has a cost of Q18)
  • Each of us has to go through migration. Note, in case you cross this border: We didn't expressly cross Mexican migration, so officially there was no record of us going out. Be sure to go through migration to avoid problems at re-entry!
    Migration has no cost.
  • Customs. As we were entering by car, I had to purchase a permit for circulation. I don't remember the exact quote, but it was around Q150, and the permit is valid for 90 days.
  • That's it! Welcome to Guatemala!
La Mesilla is in Guatemala's Huehuetenango Department and, of all the departments we crossed on the way to Guatemala City (Huehuetenango, Quetzaltenango, Totonicapán, Sololá, Chimaltenango, Sacatepéquez and Guatemala), this is the largest one. Huehuetenango is home to the Cuchumatanes mountain ridge. We found beautiful, really steep, really fertile mountains. It is plainly amazing: mountains over 60°, and quite often in full agricultural use, even at their steepest points! The CA-1 highway is, in general, in very good shape. There are however many (many, many) speed bumps (topes, in Mexican terminology, or túmulos in Guatemalan), at least a couple at every village we crossed, not always painted. The road is narrow and quite winding; it follows river streams for most of the way. We feared it would be in much worse shape, from what we had heard, but during the whole way we found only three points where the road was unusable due to landslides, and an alternative road was always in place when we needed it. After Totonicapán, the narrow road becomes a wide (four lane) highway. Don't let that fool you! It still goes through the center of every village along the road, so it's really not meant for speeding. Also, even though the pavement is in very good condition, it is really steep quite often. It is not the easiest road to drive, but it's (again) by far not as bad as we expected. We arrived at Guatemala City as dusk was falling, and promptly got lost. Guatemala has a very strange organization scheme: the city is divided into several zones, laid out in a swirl-like fashion. East-west roads are called Calle and north-south roads are called Avenida (except for zona 4, I think, where they are diagonal, and some are Rutas while the others are Vías; I won't go into too much detail). Thing is, many people told us it's a foolproof design, and people from different countries understand the system perfectly. We didn't... at least not when we arrived. We got quite lost, and it took us around one hour to arrive at our hotel, at almost 19:00, almost 12 hours since we left San Cristóbal. We went for a quick dinner, and then waited for our friends to arrive after the first day of ECSL, which we missed completely. And, of course, we were quite tired, so we didn't stay up much longer.

Antigua Guatemala

On Saturday, ECSL's activities started after 14:00, so we almost-kidnapped Wences, the local organization lead, and took him along to show us around Antigua Guatemala. Antigua was the capital of Guatemala until an earthquake destroyed it in the 1770s; the capital was moved to present-day Guatemala City, but Antigua was never completely abandoned. Today it is a World Heritage Site and a beautiful city, where we could/should have stayed for several days. But we were there for the conference, so we were in Antigua just a couple of hours, and headed back to Guatemala. Word of caution: going from Guatemala to Antigua, we went down the steepest road I have ever driven. Again, a real four-lane highway... but quite scary! The main focus of this post is to give some road-trip advice to potential readers... so, this time around, I won't give much detail regarding ECSL. It was quite interesting and we had some very good discussions... but it would take me too much space to talk about it!

The road back: Guatemala → Tecún Umán / Cd. Hidalgo → Arriaga

So, about the road back: yes, we had just spent three days getting to Guatemala City. We were there only for ~36 hours. And... we needed to be here by Tuesday morning no matter what.
So, on Sunday at noon we said goodbye to our good friends at ECSL and started the long way back. To get to know more of Guatemala, we went back by the CA-2 highway, which goes through the coastal plains: not close to the Pacific Ocean, which we didn't get to see at all, but not through the mountains either. To get to CA-2, we took CA-9 from Guatemala. If I am not mistaken, this is the only toll road in Guatemala (at least, the only one we used, and we used some pretty good highways!). It is not expensive; I don't remember exactly, but it must have been around Q20 (US$3). We went south past Palín until CA-2, just outside Escuintla city, and headed west. Through all of Escuintla and Suchitepéquez it is again a four-lane highway; somewhere in Retalhuleu it becomes a two-lane highway. We were strongly advised not to take this road at night because, as the population density is significantly lower than along CA-1, it can get lonely at times, and there are several reports of robberies. We did feel the place much less populated, but saw nothing suspicious in any way. Something important: there are almost no speed bumps on CA-2! The terrain stayed quite flat and easy as we crossed Quetzaltenango, and only in San Marcos did we find some interesting hills and a very strong rain that would intermittently accompany us for the rest of the ride. So, we finally arrived at the border city of Tecún Umán at around 16:30, approximately four hours after leaving the capital. The Tecún Umán / Cd. Hidalgo cities and border pass are completely different from the disorderly and dirty Cd. Cuauhtémoc / La Mesilla ones. The city of Tecún Umán could be just a nice town anywhere in the country; it does not feel aggressive like most border cities I have seen on our continent. We stopped to eat at El Pollo Campero and headed to the border. On the Mexican side, we also saw a very well consolidated, big and ordered migration area. The migration officers were very kind and helpful. As we had left via Cd. Cuauhtémoc, Regina hadn't got a stamp showing she left Mexico, so technically she was illegally out of the country (as she is not a national... they didn't care about the rest of us). The procedure to fix this was easy, simple, straightforward. We only paid for the fumigation again (MX$60, US$5), and were allowed to leave. Anyway, we crossed the border. There is a ~30Km narrow road between Cd. Hidalgo and Tapachula, but starting in Tapachula we went northwards on a very good, four-lane and very straight highway. Even though we had agreed not to drive at night... well, we were quite hurried and still too far from Mexico City, so we decided to push on for three more hours, following the coastline until the city of Arriaga, almost at the border between Chiapas and Oaxaca. We found a little hotel to sleep some hours and collapsed. Word of warning: this road (from Tapachula to Arriaga) is also known for its robberies. We saw only one suspicious thing: two guys pushing up their motorcycle, from which they had apparently fallen. We didn't stop, as they looked healthy and not much in need of help, but later on we talked about it: even though this was at night, they were not moving as if they had just crashed; nothing was scratched, neither the motorcycle nor their clothes. That might have been an attempt to mug us (or whoever stopped). This highway is very lonely, and the two directions are separated by a wall of vegetation, so nobody would likely have seen us had we stopped for some minutes. Be aware if you use this road!
The trip comes to an end: Arriaga → Niltepec → Istmo → Córdoba → México

The next (last, finally!) day, we left at 6:30AM. After driving somewhat over one hour, we arrived at Niltepec, where a group of taxi drivers had the highway closed as a protest against their local government's tolerance of mototaxis. We evaluated going back to Arriaga and continuing via the Tuxtla Gutiérrez highway, but that would have been too long. We had a nice breakfast of tlayudas (which resulted in Pooka getting an allergic reaction shortly afterwards) and, talking with people here and there, were told about an alternative route along an agricultural road that goes around the blockade. So, we took this road the best way we could, and after probably an hour of driving at 20Km/h, finally came back to the main road. We planned on crossing the isthmus using the Acayucan-Juchitán road. We were amazed at the La Ventosa ("the windy") area, where we crossed a huge wind farm generating electricity, so of course we got our good share of photos. From then onwards, not much more is worth mentioning. We crossed the isthmus via a quite secondary road in not too good shape (although there is a lot of machinery around, and the road will most likely improve in the next few months/years), then took the toll freeway through Veracruz until Córdoba. We stopped for a (delicious and reinvigorating!) cup of coffee in Hotel Zeballos, where Agustín de Iturbide signed with Viceroy Juan O'Donojú the treaties that granted Mexico its independence. Traveller, beware: when crossing between Puebla and Veracruz, there is a steep slope of almost 1000m where you will almost always (except close to noon) find very thick fog; taking the highway from Córdoba, this is in the region known as Cumbres de Maltrata. We had the usual fog and, just as we left it, a thin but constant rain that stayed with us until we got home. We crossed Puebla state without further incident, and arrived at Pooka and Moni's house by 22:00. Less than one hour later, Regina and I arrived home as well. This was four days ago... and I have finally finished writing it all down ;-) Hope you find this useful, or if not, at least entertaining! If you read this post on my blog, you will find many pictures taken along the trip below (well, if you are reading the right page, not the general blog index...). If you are reading from a planet or other syndication service... well, come to the blog!

Dreamhost woes

Oh, and... yes, it sometimes happens: my blog is hosted at Dreamhost. This means that usually it works correctly... but sometimes, especially when many people request many nontrivial pages, it just gives an error. If you get an error, reload once or twice... or for as long as your patience holds ;-)

6 November 2012

Russ Allbery: Review: Cerebus

Review: Cerebus, by Dave Sim
Series: Cerebus #1
Publisher: Aardvark-Vanaheim
Copyright: August 1987
Printing: July 2003
ISBN: 0-919359-08-6
Format: Graphic novel
Pages: 546
Cerebus is something of a legend in comics. Begun in December of 1977 by Dave Sim, it was one of the first entirely independent, self-published comics in a field dominated by the large work-for-hire companies like Marvel and DC. It ran for 300 issues and nearly 27 years and became one of the most influential independent comic books of all time, in part due to Sim's outspoken views in favor of creator rights and his regular use of the editorial pages in Cerebus issues to air those views. This collection (the first "phonebook") collects issues 1 through 25, with one of the amazing wrap-around covers that make all of the phonebooks so beautiful (possibly partly by later Cerebus collaborator Gerhard, although if so it's uncredited so far as I can tell). Cerebus reliably has some of the best black-and-white art you will ever see in comics. There is some debate over where to start with Cerebus, and a faction that, for good reasons, argues for starting with the second phonebook (High Society). While these first twenty-five issues do introduce the reader to a bunch of important characters (Elrod, Lord Julius, Jaka, Artemis Roach, and Suenteus Po, for example), all those characters are later reintroduced, and nothing that happens here is hugely vital for the overall story. It's also quite rough, starting as Conan parody with almost no depth. The first half or so of this collection features lots of short stories with little or no broader significance, and the early ones are about little other than Cerebus's skills and fighting abilities. That said, when reading the series, I like to start at the beginning. It is nice to follow the characters from their moment of first introduction, and it's delightful to watch Sim's ability grow (surprisingly quickly) through the first few issues. Cerebus #1 is bad: crude, simplistic artwork, almost nothing in the way of a story, and lots of purple narration. But flipping forward even to Cerebus #6 (the first appearance of Jaka), one sees a remarkable difference. By Cerebus #7, Cerebus looks like himself, the plot is getting more complex, and Sim is clearly hitting his stride. And, by the end of this collection, the art has moved from crude past competent and into truly beautiful in places. It's one of the few black-and-white comics where I never miss color. The detailed line work is more enjoyable than I think any coloring could be. The strength of Cerebus as an ongoing character slowly emerges from behind the parody. What I like the most about Cerebus is that he's neither a predestined victor (apart from the early issues that follow the Conan model most closely) nor a pure loner who stands apart from the world. He gets embroiled in political affairs, but almost always for his own reasons (primarily wealth). He has his own moral code, but it's fluid and situational; it's the realistic muddle of impulse and vague principle that most of us fall back on in our everyday life, which is remarkably unlike the typical moral code in comics (or even fiction in general). And while he is in one sense better and more powerful than anyone else in the story, that doesn't mean Cerebus gets what he wants. Most stories here end up going rather poorly for him, forcing daring escapes or frustrating cutting of losses. Sim quickly finds a voice for Cerebus that's irascible, wise, practical, and a bit world-weary, as well as remarkably unflappable. He's one of the best protagonists in comics, and that's already clear by the end of this collection.
Parody is the focus of these first issues, which is a mixed bag. The early issues are fairly weak sword-and-sorcery parody (particularly Red Sonja, primarily a vehicle for some tired sexist jokes) and worth reading only for the development in Sim's art style and the growth of Cerebus as a unique voice. Sim gets away from straight parody for the middle of the collection, but then makes an unfortunate return for the final few issues, featuring parodies of Man-Thing and X-Men that I thought were more forced than funny. You have to have some tolerance for this, and (similar to early Pratchett) a lot of it isn't as funny as the author seems to think it is. That said, three of Sim's most brilliant ongoing characters are parodies, just ones that are mixed and inserted into the "wrong" genres in ways that bring them alive. Elrod of Melvinbone, a parody of Moorcock's Elric of Melniboné who speaks exactly like Foghorn Leghorn, should not work and yet does. He's the source of the funniest moments in this collection. His persistent treatment of Cerebus as a kid in a bunny suit shouldn't be as funny as it is, but it reliably makes me laugh each time I re-read this collection. Lord Julius is a straight insertion of Groucho Marx who really comes into his own in the next collection, High Society, but some of the hilarious High Society moments are foreshadowed here. And Artemis Roach, who starts as a parody of Batman and will later parody a huge variety of comic book characters, provides several delightful moments with Cerebus as straight man. I'm not much of a fan of parody, but I still think Cerebus is genuinely funny. High Society is definitely better, but I think one would miss some great bits by skipping over the first collection. Much of what makes it work is the character of Cerebus, who is in turn a wonderful straight man for Sim's wilder characters and an endless source of sharp one-liners. It's easy to really care about and root for Cerebus, even when he's being manipulative and amoral, because he's so straightforward and forthright about it. The world Sim puts him into is full of chaos, ridiculousness, and unfairness, and Cerebus is the sort of character to put his head down, make a few sarcastic comments, and then get on with it. It's fun to watch. One final note: I've always thought the "phonebook" collections were one of Sim's best ideas. Unlike nearly all comic book collections, a Cerebus phonebook provides enough material to be satisfying and has always felt like a good value for the money. I wish more comic book publishers would learn from Sim's example and produce larger collections that aren't hardcover deluxe editions (although Sim has an admitted advantage from not having to reproduce color). Followed by High Society. Rating: 7 out of 10

11 October 2012

Eddy Petrișor: A shitstorm is coming

It has been brought to my attention that a company selling a so-called colon cleansing product wanted to threaten a Romanian skeptical blogger with a lawsuit because he wrote some articles showing that all such products (the one produced by the said company being the best known and most popular in Romania) are pure quackery and that there is no scientific basis for the claims made to promote them.

In his articles he also explained how, in fact, the mucoid plaque, the thing that supposedly proves the efficiency of the product, is itself a result of taking the product, due to its ingredients, and how no such mucoid plaque has ever been observed in any colonoscopy, colon surgery or any other situation where you'd expect it to be seen. He also quoted specialists and lots of other scientific references, showing an honest approach to the issue.

In response to the initial take-down message from the company doing business with people's crap, the blogger said he would like to see scientific proof of the claims made for the product; if that were provided, he would take down the articles and publish a correction.

The company decided that the best way to continue was to make a legal threat and ask for 100,000 euros (one hundred thousand euros) in damages, in a country where, according to the latest data from the National Statistics Institute, the total average monthly personal income is about 180 euros.

The blogger, in reply, decided the threat should be made public and wrote another article, which probably made the company very unhappy, because they then decided to sue Wordpress so that Wordpress would take down the blog.

And that's exactly what they did: they sued Wordpress and sent some documents to Wordpress, who passed them on to the blogger. Among the documents there were 4 PDF files, each containing an original article (in Romanian) from the blog, but only 3 PDFs containing English translations, for only 3 of them. The missing one was the article in which the blogger himself showed there wasn't any legal basis for the threats they had initially made against him.

Here are the translations (ironically, made at the company's own expense):

Initial article entitled "ColonHelp doesn't help the colon. But it empties your wallet!" (original here)
Initial Article.en [embedded Scribd document]



The second article entitled "Again about ColonHelp and intestinal cleansers" (original here)
Follow Up.en [embedded Scribd document]



** Missing translation of the first reply to threats (original Romanian text here)



The second article about the threats entitled "People who clean the colon have filled the fan with shit" (original here)
Threats 2 [embedded Scribd document]


The Romanian blogger explains more of the details of this issue in his latest article on his blog.

The company is called Zenyth Pharmaceuticals, and Wordpress will probably lose the lawsuit by not presenting themselves in any way in the Romanian courts, but I think some Streisand effect would really help kick this company's ass into its rightful place, at the top of the hall of shame.

The product is called ColonHelp.

Please spread this information as wide as possible.
Do NOT link to the company's site (it would raise its search engine rank); link to the blogger's articles or the translations instead.


If any Romanian speaker cares to translate the untranslated article and publish it somewhere on the web, I would be more than glad to update this article and add a link to that translation.

4 September 2012

Tollef Fog Heen: Driving Jenkins using YAML and a bit of python

We recently switched from Buildbot to Jenkins at work, for building Varnish on various platforms. Buildbot worked-ish, but was a bit fiddly to get going on some platforms such as Mac OS and Solaris. Where Buildbot has a daemon on each node that is responsible for contacting the central host, Jenkins uses SSH as the transport and centrally manages retries if a host goes down or is rebooted. All in all, we are pretty happy with Jenkins, except for one thing: the job configurations are a bunch of XML files, and the way you are supposed to configure this is through a web interface. That doesn't scale particularly well when you want to build many very similar jobs. We want to build multiple branches, some of which are not public, and we want to build on many slaves. The latter we could partially solve with matrix builds, except that a matrix build fails the entire build if a single slave hits an error that would succeed on retry. As the number of slaves increases, such failures become more common. To solve this, I hacked together a crude tool that takes a YAML file and writes the XML files; a sketch of the idea follows. It's not anywhere near as well structured and pretty as liw's jenkinstool, but it is quite good at translating the YAML into a bunch of XML files. I don't know if it's useful for anybody else, there is no documentation and so on, but if you want to take a look, it's on github. Feedback is most welcome, as usual. Patches even more so.
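To give a flavour of the approach, here is a hypothetical input file. The actual tool's format is undocumented, so this schema is an invented illustration of the YAML-to-XML idea, not the tool's real one:

# One template, expanded per branch/slave combination into one XML job config.
defaults:
  scm: git
  repo: git://example.org/varnish.git    # placeholder repository URL
branches: [master, "3.0"]
slaves: [debian6-amd64, solaris10-sparc, macos]
job_name: varnish-{branch}-{slave}

A small script then loops over the combinations, substitutes the values into a config.xml template, and drops one result per job into Jenkins' jobs/ directory, so a flaky slave only fails its own job rather than a whole matrix build.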

23 July 2012

Tollef Fog Heen: Automating managing your on-call support rotation using google docs

At work, we have a rotation of who is on call at a given time. We have few calls, but they do happen and so it's important to ensure both that a person is available, but also that they're aware they are on call (so they don't stray too far from their phone or a computer). In the grand tradition of abusing spreadsheets, we are using google docs for the roster. It's basically just two columns, one with date and one with user name. Since the volume is so low, people tend to be on call for about a week at a time, 24 hours a day. Up until now, we've just had a pretty old and dumb phone that people have carried around, but that's not really swish, so I have implemented a small system which grabs the current data, looks up the support person in LDAP and sends SMSes when people go on and off duty as well as reminding the person who's on duty once a day. If you're interested, you can look at the (slightly redacted) script.
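This is not the actual script, just a minimal sketch of the idea; the spreadsheet export URL, the LDAP attribute and the send-sms command are all placeholders:

#!/bin/sh
# Fetch the roster as CSV, find today's person, look up their phone
# number in LDAP, and hand a reminder to an SMS gateway.
ROSTER='https://docs.google.com/spreadsheet/ccc?key=KEY&output=csv'
TODAY=$(date +%Y-%m-%d)
PERSON=$(curl -s "$ROSTER" | awk -F, -v d="$TODAY" '$1 == d { print $2 }')
PHONE=$(ldapsearch -x -LLL "(uid=$PERSON)" mobile | awk '/^mobile:/ { print $2 }')
echo "You are on call today, $PERSON" | send-sms "$PHONE"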

26 February 2012

Iustin Pop: First running event of 2012

Today I had the first running race of 2012. Well, actually, it was my first race in Switzerland ever, so a double first. And, if I have to be precise, it was my first run with a shiny new Garmin 910XT; while I had ordered it more than a month before, out-of-stock and other delays converged such that I received the new toy the evening before the run. After getting it, I walked around a bit to confirm it was not DOA, read the manual, and hoped it wasn't too different in basic operation from my old Garmin. So a triple first. Also (this is the last one, I promise!), this was the first time I ran with a foot pod (pedometer), so it was the first time I got actual cadence measurements.

The run

I went to the Bremgarter Reusslauf, which is an 11km flat run held in the woods near Bremgarten, a very very nice small town. I had never visited it before (even though it's quite close to Zürich), and I was pleasantly surprised by it. I should go back sometime and visit a bit more. The weather was cloudy (and just a tiny bit foggy), a bit cold but not much. From this point of view it was good running weather, but it detracted from the scenery. Otherwise, the course itself was very, very enjoyable; I could have used a camera lots. At one point, in the middle of the (light, not dense) woods, the course passed beneath the arm of an excavator which formed a kind of mechanical arch (in the middle of nature); at another point, we crossed the river, and the sight was so nice I had to stop myself from stopping and admiring the view. The course is marked as a 'flat' one, but I've read some blog posts saying that this is not entirely true, so I was prepared for 'the worst'. Actually, I found it to be surprisingly well behaved; even though the finish was near the start, it seemed to me to be 60% downhill, 30% flat and only 10% somewhat uphill.

Garmin 910XT

I didn't quite know what to expect, upgrading from my Forerunner 205. I read the very nice review on dcrainmaker, but in practice I still didn't know what to expect (the review is awesome, but soo detailed). The new toy seems more stylish, so I expected a worse UI or behaviour. On the contrary, it worked very well; it is much more responsive to changes in direction/pace (my 205 used to take tens of seconds before adjusting the pace, the 910 is very fast), and the new minor features (like auto-lap) were nice (I was able to track average speed on each 1Km lap, very useful). It's a ++ on the upgrade, with the big downside that so far it doesn't seem to work under Linux. It also has a barometric altimeter, but I didn't calibrate it (explicitly); so while the absolute measurements might be off, the relative elevation changes were much more sane than with the 205 (GPS is not too good at elevation changes, as far as I know). One funny thing about the delays in getting the device: before the race, another runner asked me (note: the conversation was in German, accuracy not guaranteed):
"Is that the new Garmin?" "Yes, it is, I just got it last evening." (showing me his identical watch) "Ha! Me too, I got this only two days ago!"
So yes, I think a few people were happy to get their new toy before the race. I also now have a heartbeat sensor, but as I had never used one before, I didn't want to try it for the first time during a race, so I skipped that. The foot pod, on the other hand, is practically ignorable, so I took it along (sometimes I think I run just because there are nice running toys). As to accuracy, it was quite OK; it measured 10.86 km instead of the official 11 km, but most/all of the difference was in the first km of the run (inaccuracy while starting to run? who knows). I'll have to see how the watch performs while cycling (uh, I haven't done that in a long while) or swimming (I haven't done that in an even longer while), but for running it's a good upgrade from my old watch.

Results

My normal run pace is about 6min/km, which is quite slow. So I didn't expect anything of the race, except to finish it. There were lots of people participating (> 2,600 in the main 11km category). I started slow, because I know I usually start too fast and get tired quickly. This allowed me not to feel the first, and only, significant hill, and after that it was, or seemed, all downhill (literally, I mean, not figuratively), so in the end I had (for me) a very good time. By the Garmin, I ran 10.86 km in 59:01, for an average of 5:26 min/km. By the official results, I did 11 km in 58:56, with a pace of 5:21 min/km. I had never before run 10k faster than 5:30, so I was very, very happy. As for individual laps (1 km) as recorded by the Garmin: [lap chart not reproduced]. On the overall ranking, I ended up 362 out of 394 in my category. Hah! I'll try to remember this race simply for my own pace results, and ignore the ranking. Honestly, I'm not sure how people can run twice as fast; I need to train more! The foot pod gave me a cadence of 83 steps per minute (average, max 88). This was quite surprising for me; I expected to be significantly below the much-talked-about 90 steps per minute. But anyway, I have lots of work ahead of me to improve my running speed. Overall, it was an excellent day, and I'm looking forward to the next races in/around Zürich. But I should be careful not to overdo it, like I do whenever I get excited about something.

11 January 2012

Craig Small: MediaServer with Rygel

[Image: Rygel XVI, by Cayusa via Flickr]

Like a lot of people, I have one of those set-top TV boxes that can record TV shows at set times. I made sure that I could get at the files (using an FTP server in this case) and that the files were in some sort of common standard (MPEG 4 TS). I also have a bunch of mp3 music files. That's fine when I'm on the desktop, because the files are local. I wanted to make these available to anyone in the household. DLNA seemed to be a reasonably OK way of doing this; the problem was, how to get it working in Linux? A lot of the problem is that it is hard to find a DLNA-only server. Sure, MythTV could do it, but it needs a TV tuner or a lot of fiddling around. XBMC can also do it, but it needs to be running a GUI. I even tried mediatomb, but could not get the thing to compile as the library calls to mozjs were all using deprecated functions. I just wanted a daemon that served stuff, nothing more; no fancy UI, no need for X, just file-serving goodness. Rygel is almost that. You could say it is a user server, much like a torrent client/server. The nice thing is you can fiddle around with rygel so it becomes close to a real server. This is how I did it. First, I made a rygel user with a home directory, but disabled login. I don't like programs running as root if they don't need it, and rygel doesn't need it. The home directory needs to be writable by the rygel user too, otherwise the program doesn't work too well. I use /var/local/rygel as its home. For the configuration, copy /etc/rygel.conf to ~rygel/.config/rygel.conf. This is the configuration file for rygel. I disabled all of the modules except MediaExport (a sketch of the resulting configuration appears after the script below). Make sure you disable Tracker, otherwise MediaExport will not work. Tracker is only useful for real users who are logged in and have dbus etc. going, which this user is certainly not. I made a simple rygeld file in /usr/local/sbin which basically starts the program, forks and grabs the PID to write to a pidfile. This means it is easier to track the program in the init scripts.
#!/bin/sh
#
# Rygel daemon handling
RYGEL='/usr/bin/rygel'
RYGEL_ARGS=''
su -s /bin/sh -c "nohup $RYGEL $RYGEL_ARGS > /var/local/rygel/rygel.log 2>&1 &" rygel
EXIT_CODE=$?
if [ $EXIT_CODE != 0 ] ; then
        exit $EXIT_CODE
fi
PGROUP=$(ps --no-headers -o pgrp $$)
PID=$(pgrep -g $PGROUP -f $RYGEL)
echo $PID > /var/run/rygel.pid
exit 0
In case you were wondering, the pgrp finds the process group, so the pgrep finds the right rygel process: the one that has the same process group as the starting shell. The init script is a standard init script, except that the exec flag checks for /usr/bin/rygel while the start line starts /usr/local/sbin/rygeld. This is because we want to kill the real rygel process but start it with the script. This setup works rather well. You do get some messages in the logfile about dbus not working, but they are harmless. I tried disabling the mpris and external plugins, but no matter what flag or configuration file option I tried, they would always try to start and fail with no dbus. Rygel is a reasonably light-weight way of serving media to your home network. It idles at 200 MB virtual with 16MB resident and when idle uses almost no CPU.
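For reference, the relevant part of ~rygel/.config/rygel.conf might look something like the following; the section and key names here are from memory rather than copied from this setup, so check the shipped /etc/rygel.conf for the exact spelling:

# Tracker must be off or MediaExport will not work.
[Tracker]
enabled=false

# The only module left enabled; point it at the media directories.
[MediaExport]
enabled=true
uris=/var/local/rygel/media

With everything else disabled, the daemon does nothing but index and serve the listed directories.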

9 December 2011

Christian Perrier: 10 years being Debian Developer - part 5: being a newbie DD...and working on l10n

I left you 2.5 months ago with the last question asked by my application manager, Martin Michlmayr: "Please tell me about yourself and what you intend to do for Debian". Interesting question to revisit now, indeed. Here is what I answered:

About myself first. I'm a 40 year old project manager and system administrator working in French National Aerospace Research Center. My best definition of my skills in computing is "Know more or less about a Lot of Things and be a Specialist of Nothing"...:-). I'm definitely not a programmer, nor a real system administrator, nor a RDBMS administrator, nor a personal workstation designer, though I do all of these daily. I think I'm perfect for finding the good person for having a defined job done. Besides this, I'm a genealogist for several years now. This is what finally decided me to apply for becoming a package maintainer : there are some quite good free genealogy software for Unix, though for various reasons they are not used very widely, even Unix geeks (my main software for genealogy still runs on Another Operatin System and is evertythig but free). I think that I can bring something here to the Free Software World, by helping some of these good programs in getting into the best Linux distribution I know.... For me, this is a mean for giving back to the free software movement what I gives to me since I discovered Linux 6-7 years ago. My very first intention as soon as I get my way into the Debian Developers Heaven is adopting the Geneweb package currently maintained by Brent Flugham. I'm in close contact with the author (who happens to be french, which helps) as well as a daily user of it. The current package which is in the distribution is already my work for a great part. I gave it to Brent, the current maintainer and we both agreed that it would be better for me to apply to becoming an official maintainer. I also contributed to the package for lifelines, another genealogy software. The last version of the package is also 80% my work, acknowledged by Javier, the official maintainer. Concerning that package, I do not have "plans" for adopting it (we didn't discussed of this with Javier, and I'm not sure I could bring him that much things). I came to Linux thanks to a great friend of mine, René Cougnenc. René opened my eyes to the free software world when I still thought that it was only a variant of free beers. I got really involved into Linux when I forced me to remove any other Operating System from my computer at work and tried to do my daily job with Linux. I have now succeeded at ONERA in getting free software to be accepted as a credible alternative for important projects. At this time, especially for server and network-related projects. I absolutely cannot tell why and how I came to be a Debian user. I simply don't remember. But I know why I am still a Debian user : this is a distribution which is controlled by only one organisation--->its users. And I want to be part of it. Finally, I did not mention above the somewhat "political" nature of my personal involvment into free software. Except for the physical appearence, I think I mimic RMS on several points (though he probably speaks better french than I try to speak english....which does not help for expressing complex ideas like the ones above!).

As anyone can see, I was already very verbose when writing, sorry for this. Funnily, Martin summed this up in one paragraph when he posted his AM report about my application. From what I can see, my English also hasn't improved that much since then.
It seems this is a desperate cause, I'm afraid. Anyway, all this was apparently OK for Martin and, on July 21st 2001, he wrote and posted his AM report and, on July 30th 2001, I got a mail from James Troup: "An account has been created for you on developer-accessible machines with username 'bubulle'." bubulle@debian.org was born. Now I can more easily destro^W contribute to my favourite Linux distro. Indeed, I don't remember that much about the 2001-2003 years. I was probably not that active in Debian. Mostly, I was maintaining geneweb, for which I polished the package to have it reach a quite decent state, with an elaborate debconf configuration. Indeed, at that time, I was still deeply involved in genealogy research and still contributing to several mutual help groups for it. This is about the time I set up my web site (including pages to keep the link with our US family, which we visited in 2002). I think that the major turn in my Debian activities happened around September 2002, when Denis Barbier contacted me to add support in geneweb for a new feature he had introduced in Debian: po-debconf. At that time, I knew nearly nothing about localization and internationalization. Denis was definitely one of the "leaders" of this effort in Debian. During these years, he did a tremendous job setting up tools and infrastructure to make the translation work easier. One of his achievements was "po-debconf", the set of tools and scripts that allows translating debconf "templates", the questions asked to users when configuring packages. All this led me to discover an entirely new world: the world of translating software. As often when I discover something I like, I jumped into it very deeply. In early January 2003, I made my very first contributions to debian-l10n-french and began working on systematic translation of debconf templates. Guess what the goal was: 100%, of course! Have ALL packages that have debconf templates...translated to French. We reached that goal.....on June 2nd 2008 in unstable (indeed "virtually": all packages were either 100% translated...or had a bug report with a complete translation) and on December 21st 2010 for testing. Squeeze was indeed the first Debian release with a full 100% for French. Something to learn with localization work: it's never finished and you have to be patient. So, back in 2003, we were starting this effort. debian-l10n-french was, at that time, an incredibly busy list and the translation rate was very high: I still remember spending my summer holidays translating 2-3 packages' debconf templates every day for two weeks. Meanwhile, my packaging activities were low: only geneweb and lifelines, that was all. Something suddenly changed this, and it has been the other "big turn" in my Debian life. After summer 2003, I suddenly started coming across some strange packages that needed translation: they were popping up daily in lists with funny names like "languagechooser", "countrychooser", "choose-mirror", etc. I knew nothing about them and started "translating" their strings too, sending bug reports after a decent review on debian-l10n-french. Then, Denis Barbier mailed me and explained that these things belonged to a shiny new project named Debian Installer, meant to replace the good old boot-floppies. Denis explained that it would maybe be more efficient to work directly in the "D-I" team and "commit" my work instead of sending bug reports. Commit? What's that?
You mean this wizard tool that only Real Power Developers use, named "CVS"? But this is an incredibly complicated tool, Denis. Do you really want me, the newbie DD, to play with it? Oh, and in this D-I development, I see people who are close to being semi-gods. Names I read in mailing lists that always impress me with their Knowledge and Cleverness: Martin Michlmayr (my AM, doh), Tollef Fog Heen, Petter Reinholdtsen and so many others and, doh, this impressive person named "Joey Hess" who seems to be so clever and knowledgeable, and able to write things I have no clue about. Joey Hess, really? But this guy has been in Debian forever. Me, really? Work with the Elite of Debian? Doh, doh, doh. Anyway, in about two months' time, I switched from clueless-guy status to the status of "the guy who nags people about l10n in D-I", along with another fellow named Denny "seppy" Stampfer. And then we started helping Joey to release well-localized D-I alphas and betas at the end of 2003 (the release rate at the time was incredible: Sarge installer beta1 in November 2003, beta2 in January 2004). I really remember spending my 2003 Christmas holidays hunting for....100% completion of the languages we were supporting, and helping new translators to work on D-I translation. Yes, 8 years ago, I was already doing all this..:-)...painting the world in red. All this leads up to the year 2004. Certainly the most important year in my Debian life, because it was....the year of my first DebConf. But you'll learn about this....in another post (hopefully not in 2.5 months).

23 November 2011

Joey Hess: roundtrip latency from a cabin with dialup in 2011

alt="imagine an xkcd-style infographic here" 0 seconds 0.5 seconds 5 seconds 10 seconds 20 seconds 2 minutes 5 minutes 10 minutes 22 minutes 30 minutes 32 minutes 70 minutes 180 minutes 300 minutes

31 October 2011

Russell Coker: Links October 2011

Ron has written an interesting blog post about the US as a lottery economy [1]. Most people won't win the lottery (literally or metaphorically) so they remain destined for poverty. Tim Connors wrote an informative summary of the issues relating to traffic light timing and pedestrians/cyclists [2]. I have walked between Southgate and the Crown Casino area many times and have experienced the problem he describes many times. Scientific American has an interesting article about a new global marketplace for scientific research [3]. The concept is that instead of buying a wide range of research equipment (and hiring people to run it) you can outsource non-core research for a lower cost. Svante Pääbo gave an interesting TED talk about his work analysing human DNA to determine prehistoric human migration patterns [4]. Among other things he determined that 2.5% of the DNA of modern people outside Africa came from the Neandertals. Lisa wrote an informative article about Emotional Support Animals (as opposed to Service Animals such as guide dogs) for disabled people [5]. It seems that US law is quite similar to Australian law in that reasonable accommodations have to be made for disabled people, which includes allowing pets in rental properties even if such pets aren't officially ESAs. Beyond Zero Emissions has an interesting article about electricity prices which explains how wind power forces prices down [6]. This should offset the new carbon tax. Problogger has an article listing some of the ways that infographics can be used on the web [7]. This can be for blog posts or just for your personal understanding. Petter Reinholdtsen wrote a handy post about ripping DVDs which also explains how to do it when the DVD has errors [8]. I haven't yet ripped a DVD but this one is worth noting for when I do. Miriam has written about the Fantastic Park ICT training for 8-12yo kids [9]. It's run in Spain (and all the links are in Spanish, but Google Translation works well) and is a camp to teach children about computers and robotics using Lego WeDo among other things. We need to have more of these things in other countries. The Atlantic Cities has an interesting article comparing grid and cul-de-sac based urban designs [10]. Apparently the cul-de-sac design forces an increase in car use and therefore an increase in fatal accidents while also decreasing the health benefits of walking. Having lived in both grid and cul-de-sac based urban areas I have personally experienced the benefits of the grid-based layout. Sarah Chayes wrote an interesting LA Times article about governments being taken over by corruption [11]. She argues that arbitrary criminal government leads to an increase in religious fundamentalism. Michael Lewis has an insightful article in Vanity Fair about the bankruptcy of US states and cities [12]. Ben Goldacre gave an interesting TED talk about bad medical science [13]. He starts with the quackery that is published in tabloid newspapers and then moves on to deliberate scientific fraud by medical companies. Geoff Mulgan gave an interesting TED talk about the Studio Schools in the UK which are based around group project work [14]. The main thing I took from this is that the best method of teaching varies by subject and by student. So instead of having a monolithic education department controlling everything we should have schools aimed at particular career paths and learning methods.
Sophos has an interesting article about the motion sensors of smart phones being used to transcribe keyboard input based on vibration [15]. This attack could be launched by convincing a target to install a trojan application on their phone. It's probably best to regard your phone with suspicion nowadays.
Simon Josefsson wrote a good article explaining how to use a GPG smart-card to authenticate ssh sessions, with particular reference to running backups over ssh [16].
Céran wrote a good article explaining how to use all the screen space when playing DVDs on a wide screen display with mplayer [17].
Charles Stross has an informative blog post about Wall St Journal circulation fraud [18]. Apparently the WSJ was faking readership numbers to get more money from advertisers; this should lead to lawsuits and more problems for Rupert Murdoch. Is everything associated with Wall St corrupt?

21 October 2011

Tollef Fog Heen: Today's rant about RPM

Before I start, I'll admit that I'm not a real RPM packager. Maybe I'm approaching this from completely the wrong direction, what do I know? I'm in the process of packaging Varnish 3.0.2, which includes mangling the spec file. The top of the spec file reads:
%define v_rc
%define vd_rc %{?v_rc:-%{?v_rc}}
Apparently, this is not legal, since we're trying to define v_rc as a macro with no body. It's however not possible to directly define it as an empty string which can later be tested on; you have to do something like:
%define v_rc %{nil}
%define vd_rc %{?v_rc:-%{?v_rc}}
Now, this doesn't work correctly either. %{?macro} tests if macro is defined, not whether it's an empty string, so instead of two lines, we have to write:
%define v_rc %{nil}
%if 0%{?v_rc} != 0
%define vd_rc %{?v_rc:-%{?v_rc}}
%endif
The 0%{?v_rc} != 0 workaround is there so that we don't accidentally end up with a bare != 0 (which would be a syntax error) when the macro expands to nothing. I think having four lines like that is pretty ugly, so I looked for a workaround and figured that, ok, I'll just rewrite every use of %{vd_rc} to %{?v_rc:-%{?v_rc}}. There are only a couple, so the damage is limited. Also, I'd then just comment out the v_rc definition, since that makes it clear what you should uncomment to have a release candidate version. In my naivety, I tried:
# %define v_rc ""
# is used as a comment character in spec files, but apparently not for defines. The define was still processed and the build process stopped pretty quickly. Luckily, doing # % define v_rc "" (note the space between % and define) seems to work fine and is not processed. I have no idea how people put up with this or if I'm doing something very wrong. Feel free to point me at a better way of doing this, of course.
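Putting it all together, here is a minimal sketch of the pattern I ended up with. The rc1 value and the Release line are made up for illustration (the real spec file obviously contains a lot more), but it shows how the conditional dash works:
# To build a release candidate, uncomment the next line and remove the
# space between % and define (remember: a commented-out %define is
# still processed, a commented-out % define is not):
# % define v_rc rc1
Name: varnish
Version: 3.0.2
# %{?v_rc:-%{?v_rc}} expands to "-rc1" when v_rc is defined, and to
# nothing at all otherwise:
Release: 1%{?v_rc:-%{?v_rc}}%{?dist}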

5 October 2011

Tollef Fog Heen: The SugarCRM rest interface

We use SugarCRM at work and I've complained about its not-very-RESTy REST interface. John Mertic, a (the?) SugarCRM Community Manager, asked me about what problems I'd had (apart from its lack of RESTfulness) and I said I'd write a blog post about it. In our case, the REST interface is used to integrate Sugar and RT so we get a link in both interfaces to jump from opportunities to the corresponding RT ticket (and back again). This should be a fairly trivial exercise, or so you would think. The problems, as I see them, are: My first gripe is the complete lack of REST in the URLs. Everything is just sent to https://sugar/service/v2/rest.php. Usually a POST, but sometimes a GET. It's not documented what to use where. The POST parameters we send when logging in are:
method=>"login"
input_type=>"JSON"
response_type=>"JSON"
rest_data=>json($params)
$params is a hash as follows:
user_auth => {
    user_name => $USERNAME,
    password => $PW,
    version => "1.2",
},
application => "foo",
Nothing seems to actually care about the value of application, nor about the user_auth.version value. The password is the md5 of the actual password, hex encoded. I'm not sure why, as this adds absolutely no security, but there it is. This is also not properly documented. This gives us a JSON object back with a somewhat haphazard selection of attributes (reformatted here for readability):
{
    "id": "<hex session id>",
    "module_name": "Users",
    "name_value_list": {
        "user_id": {
            "name": "user_id",
            "value": "1"
        },
        "user_name": {
            "name": "user_name",
            "value": "<username>"
        },
        "user_language": {
            "name": "user_language",
            "value": "en_us"
        },
        "user_currency_id": {
            "name": "user_currency_id",
            "value": "-99"
        },
        "user_currency_name": {
            "name": "user_currency_name",
            "value": "Euro"
        }
    }
}
What is the module_name? No real idea. In general, when you get back an id and a module_name field, it tells you that the id refers to an object that exists in the context of the given module. Not here, since the session id is not a user. The worst part is the name_value_list concept, which is used all over the REST interface. First, it's not a list, it's a hash. Secondly, I have no idea what would be wrong with just using keys directly in the top level object, so the object would have looked somewhat like:
{
    "id": "<hex session id>",
    "user_id": 1,
    "user_name": "<username>",
    "user_language": "en_us",
    "user_currency_id": "-99",
    "user_currency_name": "Euro"
}
Some people might argue that since you can have custom field names this can cause clashes. Except, it can't, since they're all suffixed with _c. So we're now logged in and can fetch all opportunities. This we do by posting:
method=>"get_entry_list",
input_type=>"JSON",
response_type=>"JSON",
rest_data=>to_json([
            $sid,
            $module,
            $where,
            "",
            $next,
            $fields,
            $links,
            1000
])
Why is this a list rather than a hash? Again, I don't know. A hash would make more sense to me. The resulting JSON looks like:
{
    "result_count" : 16,
    "relationship_list" : [],
    "entry_list" : [
        {
            "name_value_list" : {
                "rt_status_c" : {
                    "value" : "resolved",
                    "name" : "rt_status_c"
                },
                [...]
            },
            "module_name" : "Opportunities",
            "id" : "<entry_uuid>"
        },
        [...]
    ],
    "next_offset" : 16
}
Now, entry_list actually is a list here, which is good and all, but there's still the annoying name_value_list concept. Lastly, we want to update the record in Sugar; to do this we do:
method=>"set_entry",
input_type=>"JSON",
response_type=>"JSON",
rest_data=>to_json([
    $sid,
    "Opportunities",
    $fields
])
$fields is not a name_value_list, but instead is:
{
    "rt_status_c" : "resolved",
    "id" : "<status text>"
}
Why this works when my attempts at using a proper name_value_list didn't? I have no idea. I think that pretty much sums it up. I'm sure there are other problems in there (such as the over 100 lines of support code for the roughly 20 lines of actual code that do useful work), though.
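For the curious, the whole round trip boils down to something like the sketch below. This is not our actual support code, just a minimal illustration of the calls described above; the username, password, field list and result limit are made up, and the error handling is minimal:
#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);
use JSON qw(to_json from_json);
use LWP::UserAgent;

my $url = "https://sugar/service/v2/rest.php";  # everything goes here
my $ua  = LWP::UserAgent->new;

# Every call is a form-encoded POST with the same four parameters.
sub call {
    my ($method, $rest_data) = @_;
    my $resp = $ua->post($url, {
        method        => $method,
        input_type    => "JSON",
        response_type => "JSON",
        rest_data     => to_json($rest_data),
    });
    die $resp->status_line unless $resp->is_success;
    return from_json($resp->decoded_content);
}

# Log in; the password is the hex-encoded md5 of the real password.
my $login = call("login", {
    user_auth => {
        user_name => "someuser",
        password  => md5_hex("secret"),
        version   => "1.2",
    },
    application => "foo",   # apparently ignored by the server
});
my $sid = $login->{id};     # a session id, not a user id

# Fetch opportunities; note the positional list instead of a hash.
my $entries = call("get_entry_list", [
    $sid,                     # session id
    "Opportunities",          # module
    "",                       # where clause
    "",                       # order by
    0,                        # offset
    [ "id", "rt_status_c" ],  # fields to fetch
    [],                       # links / related fields
    1000,                     # max results
]);
for my $entry (@{ $entries->{entry_list} }) {
    print $entry->{id}, " ",
        $entry->{name_value_list}{rt_status_c}{value}, "\n";
}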

31 August 2011

Tollef Fog Heen: Bizarre slapd (and gnutls) failures

Just this morning, I was setting up TLS on an LDAP host, but slapd refused to start afterwards with a bizarre error message:
TLS init def ctx failed: -207
The key and certificate were freshly generated using openssl on my laptop (running wheezy, so OpenSSL 1.0.0d-3). After a bit of googling, I discovered that -207 is gnutls-esque for "Base64 error". Of course, the key looks just fine and decodes fine using base64, openssl base64 and even gnutls's own certtool. Now, certtool also spits out what it considers the right base64 version of the key, and I noticed it differed. Using the one certtool outputs seems to work, though, so if you ever run into this problem, try running the key through certtool --infile foo.pem -k and use the base64 representation it outputs.
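In other words, something along these lines, with foo.pem standing in for wherever your key actually lives:
# certtool prints, among other information, its own base64 rendering of
# the key; paste that PEM block back into the file slapd reads:
certtool --infile foo.pem -k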

18 August 2011

Raphaël Hertzog: People behind Debian: Peter Palfrader, Debian System Administrator

You might not know who Peter is because he's not very visible on Debian mailing lists. He's very active however, in particular on IRC. He was an admin of the OFTC IRC network at the time Debian switched from Freenode to OFTC. Nowadays he's a member of the Debian System Administration team who runs all the debian.org servers. If you went to a DebConf you probably met him since he's always looking for new signatures of his GPG key. He owns the best connected key in the PGP web of trust. He also wrote caff, a popular GPG key signing tool.

Raphael: Who are you?

Peter: I'm Peter Palfrader, also known as weasel. I'm in my early 30s, born and raised in Innsbruck, Austria, and am now living and working in Salzburg, Austria. In my copious free time, other than helping run Debian's servers, I also help maintain the Tor project's infrastructure. Away from the computer I enjoy reading fiction (mostly English language Science Fiction and Fantasy), playing board games and going to the movies. Weather permitting, I also occasionally do some cycling.

Raphael: How did you start contributing to Debian?

Peter: I installed my first Debian the week slink came out. That was Debian 2.1 for the youngsters, in early 1999. The one thing I immediately liked about slink was that Debian's pppd supported RAS authentication, which my university's dial-up system required. No way I'd go back to SuSE 5.3 when I had working Internet with my Debian box. :) During that year I started getting involved in the German language Debian channel on IRCnet, which got me in contact with some DDs. Christian Kurz (<shorty>) was working on Debian QA at the time and he asked for my help in writing a couple of scripts. Some of that work, debcheck, still produces parts of the qa.d.o website, tho the relevance of that nowadays is probably negligible. While trying to learn more Perl earlier, I had written a program to produce syntax highlighted HTML for code snippets in various languages. I didn't really know what I was doing but it kinda worked, and probably still does since I still get mail from users every now and then. I figured that it would be really nice if people could just get my software together with Debian. According to code2html's Debian changelog the initial release of the package was done on a weekday at 2:30 in the morning early in 2000, and if my memory serves me correctly, shorty uploaded it shortly afterwards. I started packaging a couple of other pieces of software and in the same year I sent my mail to the Debian account managers to register my intent to become a DD. No new developers were being accepted at that time since the DAMs wanted to overhaul the entire process, so I wasn't surprised not to get any immediate reply. Of course what the silence also meant was that the mail had been lost, but I only learned of that later, when I took all my courage to ask DAM about the status of my application a couple of months later. Once that was sorted out I was assigned an AM, did the usual dance, and got my account late in November 2000.

Raphael: Four years ago, the Debian System Administration team was a real bottleneck for the project and personal conflicts made it almost impossible to find solutions. You were eager to help and at some point you got dropped into that team as a new member. Can you share your story and how you managed the transition in the difficult climate at that time?

Peter: Ah, that was quite the surprise for an awful lot of people, me included.
Branden Robinson, who was our DPL for the 2005-2006 term, tried to get some new blood added to DSA, who were at the time quite divided. He briefly talked to me on IRC some time in summer 2005, telling me I had "come recommended for a role on the sysadmin team". In the course of these 15 minutes he outlined some of the issues he thought a new member of DSA would face and asked me if I thought I could help. My reply was cautiously positive, saying that I didn't want to step on anybody's toes but maybe I could be of some assistance. And that was the first and last of it, until some fine November day two years later I got an email from Phil Hands saying "I've just added you to the adm group, and added you to the debian-admin@d.o alias" and "welcome on board". *blink* What!?

My teammates at the time were James Troup (elmo), Phil Hands (fil), Martin "Joey" Schulze and Ryan Murray (neuro). The old team, while apparently not on good terms with one another, was however still around to do heavy lifting when required. I still remember when, on my first or second day on the team, two disks failed in the raid5 of ftp-master.debian.org aka ries. Neuro did the reinstall once new disks had arrived at Brown University. I'm sure I'd have been way out of my league had this job fallen to me. Fortunately my teammates were all willing and able to help me find whatever pieces of information existed that might help me learn how debian.org does its stuff. Unfortunately a lot of it only existed in various heads, or, when lucky, in one of the huge mbox archives of the debian-admin alias or list. Anyway, soon I was able to get my hands dirty with upgrading from sarge to etch, which had been released about half a year earlier.

Raphael: I know the DSA team has accomplished a lot over the last few years. Can you share some interesting figures?

Peter: Indeed we have accomplished a lot. In my opinion the most important of these accomplishments is that we're actually once again a team nowadays. A team where people talk to one another and where nobody should be a SPoF. Since this year's DebConf we are six people in the admin team: Tollef Fog Heen (Mithrandir) and Faidon Liambotis (paravoid) joined the existing members: Luca Filipozzi, Stephen Gran, Martin Zobel-Helas, and myself. Growing a core team, especially one where membership comes with uid0 on all machines, is not easy and that's why I'm very glad we managed to actually do this step. I also think the infrastructure and our workflows have matured well over the last four years. We now have essential monitoring as a matter of course: Nagios not only checks whether all daemons that should be running are in fact running, but it also monitors hardware health of disks, fans, etc. where possible. We are alerted of outstanding security updates that need to be installed and of changes made to our systems that weren't explicitly acked by one of us. We have set up a centralized configuration system, puppet, for some of our configuration that is the same, or at least similar, on all our machines. Most, if not all, pieces of software, scripts and helpers that we use on debian.org infrastructure are in publicly accessible git repositories. We have good communication with other teams in Debian that need our support, like the ftp folks or the buildd people. As for figures, I don't think there's anything spectacular. As of the time of our BoF at this year's DebConf, we take care of approximately 135 systems, about 100 of them being real iron, the others virtual machines (KVM).
They are hosted at over 30 different locations, tho we are trying to cut down on that number, but that's a long and difficult process. We don't really collect a lot of other figures like web hits on www.debian.org or downloads from the ftp archive. The web team might do the former, and the latter is pretty much impossible due to the distributed nature of our mirrors, as you well know.

Raphael: The DSA team has a policy of eating its own dog food, i.e. you're trying to rely only on what's available in Debian. How does that work out and what are the remaining gaps?

Peter: Mostly Debian, the OS, just meets our needs. Sure, the update frequency is a bit high; we probably wouldn't mind a longer release cycle. But on the other hand most software is recent enough. And when it's not, that's easy to fix with backports. If they aren't on backports.debian.org already, we'll just put them there (or ask somebody else to prepare a backport for us) and so everybody else benefits from that work too. Some things we need just don't, and probably won't, exist in Debian. These are mainly proprietary hardware health checks like HP's tools for their servers, or various vendors' programs to query their raid controllers. HP actually makes packages for their stuff, which is very nice, but other things we just put into /usr/local, or, if we really need it on a number of machines, package ourselves. The push to cripple our installers and kernels by removing firmware was quite annoying, since it made installing from the official media next to impossible in some cases. Support for working around these limitations has improved with squeeze, so that's probably ok now. One of the other problems is that especially on embedded platforms most of the buildd work happens on some variation of development boards, usually due to memory and hard disk requirements beyond those of the intended market audience. This often implies that the kernel shipped with Debian won't be usable on our own debian.org machines. This makes keeping up with security and other kernel fixes way more error prone and time intensive. We keep annoying the right people in Debian to add kernel flavors that actually boot on our machines, and things are getting better, so maybe in the future this will no longer be a problem.

Raphael: If you could spend all your time on Debian, what would you work on?

Peter: One of the things that I think is a bit annoying for admins that maintain machines all over the globe is mirror selection. I shouldn't have to care where my packages come from; apt-get should just fetch them from a mirror, any mirror, that is close by, fast and recent. I don't need to know which one it was. We deployed geodns for security.debian.org a while ago, and it seems to work quite well for the coarse granularity we desired for that setup, but geodns is an ugly hack (I think it is a layer violation), it might not scale to hundreds or thousands of mirrors, and it doesn't play well with DNSSEC. What I'd really like to see is Debian support apt's mirror method that I think (and I apologize if I'm wronging somebody) Michael Vogt implemented recently. The basic idea is that you simply add deb mirror://mirror.debian.org/ or something like that to your sources.list, and apt goes and asks that server for a list of mirrors it should use right now. The client code exists, but I don't know how well tested it is. What is missing is the server part: one that gives clients a mirror, or list of mirrors, that are close to them, current, and carry their architecture.
It's probably not a huge amount of work, but at the same time it's also not entirely trivial. If I had more time on my hands this is something that I'd try to do. Hopefully somebody will pick it up.

Raphael: What motivates you to continue to contribute year after year?

Peter: It's fun, mostly. Sure, there are things that need to be done regularly that are boring or become so after a while, but as a sysadmin you tend to do things once or twice and then seek to automate them. DSA's users, i.e. DDs, constantly want to play with new services or approaches to make Debian better, and often they need our support or help in their endeavors. So that's a constant flow of interesting challenges. Another reason is that Debian is simply where some of my friends are. Working on Debian with them is interacting with friends. I not only use Debian at debian.org. I use it at work, I use it on my own machines, on the servers of the Tor project. When I was with OFTC, Debian is what we put on our machines. Being a part of Debian is one way to ensure that what Debian releases is actually usable to me, professionally and with other projects.

Raphael: Is there someone in Debian that you admire for their contributions?

Peter: That's a hard one. There are certainly people who I respect greatly for their technical or other contributions to Debian, but I don't want to single anybody out in particular. I think we all, everyone who ever contributed to Debian with code, support or a bug report, can be very proud of what we are producing: one of the best operating systems out there.
Thank you to Peter for the time spent answering my questions. I hope you enjoyed reading his answers as much as I did. Subscribe to my newsletter to get my monthly summary of the Debian/Ubuntu news and to not miss further interviews. You can also follow along on Identi.ca, Twitter and Facebook.


10 August 2011

Tollef Fog Heen: Test post

Sorry about this, debugging planet.debian.

3 August 2011

Tollef Fog Heen: libvmod_curl using cURL from inside Varnish Cache

It's sometimes necessary to be able to access HTTP resources from inside VCL. Some use cases include authentication or authorization, where a service validates a token and then tells Varnish whether to proceed or not. To do this, we recently implemented libvmod_curl, which is a set of cURL bindings for VCL so you can fetch remote resources easily. HTTP would be the usual method, but cURL also supports other protocols such as LDAP or POP3. The API is very simple; to use it you would do something like:
require curl;

sub vcl_recv {
    curl.fetch("http://authserver/validate?key=" + regsub(req.url, ".*key=([a-z0-9]+)", "\1"));
    if (curl.status() != 200) {
        error 403 "Go away";
    }
}
Other methods you can use are curl.header(headername) to get the contents of a given header and curl.body() to get the body of the response. See the README file in the source for more information.
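For instance, if the validation service signalled its verdict in a response header rather than the status code, the check could look something like this (the X-Auth-Result header name is made up for the sake of the example):
sub vcl_recv {
    curl.fetch("http://authserver/validate?key=" + regsub(req.url, ".*key=([a-z0-9]+)", "\1"));
    # curl.header() returns the contents of the named response header:
    if (curl.header("X-Auth-Result") != "ok") {
        error 403 "Go away";
    }
}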

29 May 2011

Dirk Eddelbuettel: Bike The Drive 2011

Memorial Day weekend, so following the annual JPMorgan Chase CC run on Thursday, it was once again time for the annual Bike The Drive on Chicago's beautiful Lake Shore Drive. The weather was craptastic: following a fog (!!) warning we had dense fog along the lake, making it somewhat moist and chilly. Once we were done, this was followed by impressive thunderstorm warnings and rather heavy rain. But the girls were troopers and we did both halves for a total of thirty miles, as shown on this Google Maps record of the ride.

21 May 2011

Tollef Fog Heen: Upgrading Alioth

A while ago, we got another machine for hosting Alioth, and so we started thinking about how to use that machine. It's a used machine and not massively faster than the current hardware, so just moving everything over wouldn't actually get us much of a performance upgrade. However, Alioth is using FusionForge, which is supposed to be able to run on a cluster of machines. After all, this was originally built for SourceForge.net, which certainly does not run on a single host. So, a split of services is what we'll do. This weekend, we're having a sprint in Collabora's office in Cambridge, actually implementing the split and doing a bit of general planning for the future. Yesterday afternoon (Friday), European time, we started the migration. The first step is to move all the data off the Xen guest on wagner, where Alioth is currently hosted. This finished a few minutes ago; it turns out syncing about 8.5 million files across almost 400G of data takes a little while. The new host is called vasks and will host the database, run the main apache and be the canonical location for the various SCM repositories. We are not decommissioning wagner, but it'll be reinstalled without Xen or other virtualisation, which should help performance a bit. It'll host everything that has lower performance requirements, such as cron jobs, mailing lists and so on. I'll try to keep you all updated, and feel free to drop by #alioth on irc.debian.org if you have any questions.
